SYSTEM FOR PERFORMING SYNCHRONIZED DATA PROCESSING THROUGH MULTIPLE NETWORK COMPUTING RESOURCES, MET
Patent abstract:
SYNCHRONIZED DATA PROCESSING THROUGH NETWORK COMPUTING RESOURCES. The present invention relates to systems (100, 1000), methods, and machine-interpretable programming or other instruction products for managing data processing through multiple network computing resources (106, 1106). In particular, the description concerns the synchronization of related requests for data processing using distributed network resources.
Publication number: BR112012013891B1
Application number: R112012013891-0
Filing date: 2010-06-08
Publication date: 2020-12-08
Inventors: Daniel Aisen; Bradley Katsuyama; Robert Park; John Schwall; Richard Steiner; Allen Zhang; Thomas L. Popejoy
Applicant: Royal Bank Of Canada
IPC main classification:
Patent description:
TECHNICAL FIELD

[001] The present disclosure relates in general to systems, methods, and machine-interpretable programming or other instruction products for managing data processing through multiple network computing resources. In particular, the disclosure concerns the synchronization of related requests for data processing using distributed network resources.

[002] Aspects of the material disclosed in this patent application relate to the holding, transfer, and/or administration of securities and other financial interests. Aspects of such holding, transfer, and/or administration may be subject to regulation by government and other agencies. The disclosure herein is made solely in terms of logical, programming, and communication possibilities, without regard to statutory, regulatory, or other legal considerations. Nothing herein is intended as a statement or representation that any system, method, or process proposed or discussed here, or the use thereof, does or does not comply with any statute, law, regulation, or other legal requirement in any jurisdiction; nor should it be considered or interpreted as such.

BACKGROUND

[003] In various forms of networked or otherwise distributed data processing systems, complex and/or multiple related processes are often routed to multiple computing resources for execution. For example, in financial and other commercial systems, purchase orders, sale orders, and other transactions in financial interests are often routed to multiple market or exchange servers. In such cases it may be advantageous for orders or other data processing requests routed to multiple servers, or other resources, to be executed simultaneously, or as nearly simultaneously as possible, or to be executed according to any other desired synchronization or time sequence.

[004] For example, it has been observed that fill rates for orders in financial interests executed in networked electronic markets decrease significantly when such orders are filled in an unsynchronized manner across multiple markets. It has also been observed that this decline in fill rate worsens as such orders are routed to an increasing number of electronic markets. This is at least in part due to delays in the execution of later portions of such orders after their first components are filled: when an order has been executed in one market ahead of another, the intervening time period is sometimes used for price manipulation by parties trying to maximize short-term profits from quotes: once a first segment of an order has been filled, automatic changes to the terms of offers or bids in parallel markets can be implemented, causing previously published positions to be revoked and subsequent trades to be prevented.

[005] For example, when a large order is routed to multiple exchanges (for example, based on the liquidity available in each market), order segments tend to arrive at the fastest exchanges (that is, those having the lowest inherent latencies) before they reach the slower exchanges (that is, those with higher inherent latencies), and thus appear in the books of different exchanges at different times.
When order segments start to appear in the books of the faster exchanges, other parties can detect them and attempt to take advantage of the latency of the slower exchanges by canceling, changing, and/or otherwise manipulating quotes (for example, bids and offers) or other market parameters on the slower exchanges, effectively increasing the implicit costs of executing the order. As a result, orders that might otherwise have been executed on any single exchange at a high fill ratio tend to exhibit a lower overall fill ratio when routed to multiple exchanges in separate parts.

[006] Prior art documents, such as Rony Kay's article "Pragmatic Network Latency Engineering, Fundamental Facts and Analysis", attempted to address such problems by proposing the elimination of one-way (i.e., "packet") communication latencies. Such systems did not address arbitrage opportunities and other problems caused or facilitated by variations in the time required by multiple processors to execute individual portions of multi-processor execution requests (i.e., execution latencies), in addition to, or as part of, communication latencies.

SUMMARY

[007] In several aspects, the invention provides computer-executable systems, methods, and instruction mechanisms (for example, non-transient machine-readable programming structures), such as software-encoded instruction sets and data, for managing data processing through multiple network computing resources. In particular, for example, the invention provides systems, methods, and coded instruction sets useful in controlling the synchronization of related requests for data processing using distributed network resources.

[008] For example, in a first aspect, the invention provides systems, methods, and programming or other machine-interpretable instructions for performing synchronized processing of data through multiple network computing resources, such systems comprising, for example, at least one processor configured to execute machine-interpretable instructions and cause the system to:

[009] receive, from one or more data sources, signals representing instructions for executing at least one data process executable by a plurality of network computing resources;

[0010] divide the at least one data process into a plurality of data processing segments, each data processing segment to be routed to a different one of a plurality of networked execution processors;

[0011] based at least in part on latencies in the execution of previous data processing requests routed by the system to each of the plurality of networked execution processors, determine a plurality of timing parameters, each of the plurality of timing parameters to be associated with a corresponding one of the plurality of data processing segments, the plurality of timing parameters determined so as to cause synchronized execution of the plurality of data processing segments by the plurality of networked execution processors; and

[0012] using the timing parameters associated with the plurality of data processing segments, route the plurality of data processing segments to the corresponding plurality of networked execution processors.

[0013] In some embodiments, as explained herein, the networked execution processors may, for example, comprise exchange servers, and the data processing segments may represent requests for trades in financial interests such as commodities and/or intangible interests such as stocks, bonds, and/or various forms of options.
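To make the flow of paragraphs [009] - [0012] concrete, the following is a minimal sketch, not the patented implementation, of how a router might split a parent request into per-venue segments and derive send-time offsets from previously observed per-venue latencies so that the segments arrive at their venues at approximately the same time. All names (split_order, compute_send_offsets) and the latency and liquidity figures are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative sketch only: split a parent order across venues and stagger the
# send times so all segments arrive at roughly the same moment.
from dataclasses import dataclass

@dataclass
class Segment:
    venue: str                  # networked execution processor (e.g., an exchange server)
    quantity: int               # portion of the parent order routed to this venue
    send_offset_s: float = 0.0  # delay before transmission, derived from latency

def split_order(total_qty: int, available_liquidity: dict[str, int]) -> list[Segment]:
    """Divide a parent request into per-venue segments (cf. paragraph [0010])."""
    segments, remaining = [], total_qty
    for venue, liquidity in available_liquidity.items():
        qty = min(remaining, liquidity)
        if qty > 0:
            segments.append(Segment(venue, qty))
            remaining -= qty
    return segments

def compute_send_offsets(segments: list[Segment], latency_s: dict[str, float]) -> None:
    """Determine timing parameters (cf. paragraph [0011]): segments bound for
    slower venues are sent first, so that arrivals are approximately simultaneous."""
    slowest = max(latency_s[s.venue] for s in segments)
    for s in segments:
        s.send_offset_s = slowest - latency_s[s.venue]

# Example with assumed latencies (seconds) and displayed liquidity per venue.
latencies = {"Exchange 1": 0.002, "Exchange 2": 0.005, "Exchange 3": 0.009}
liquidity = {"Exchange 1": 40_000, "Exchange 2": 25_000, "Exchange 3": 35_000}
segments = split_order(77_000, liquidity)
compute_send_offsets(segments, latencies)
for s in segments:
    print(s.venue, s.quantity, f"send after {s.send_offset_s * 1e3:.1f} ms")
```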
[0014] The plurality of determined timing parameters can be used in determining and implementing timing sequences in order to implement the desired sequential execution of data processing requests according to the invention, and can, for example, represent and/or be wholly or partially based on latencies in the execution of data processing requests arising from many factors. For example, such parameters can be wholly or partially based on dynamically monitored latency(ies) in the execution of signal processing requests previously routed by the system to at least one of the plurality of networked execution processors. Such latencies can be caused by many factors, including, for example, various types of communication and data processing delays. Such timing parameters can further be based on statistical models, for example probability models, of observed latency data and patterns therein.

[0015] Such systems, methods, and programming or other machine-interpretable instructions can be further configured to cause a system to:

[0016] associate with each of at least one of the plurality of data processing segments data representing at least one quantity term, the at least one quantity term representing at least one quantity of a financial interest to be traded in accordance with a request represented by each of the at least one data processing segments, and at least one corresponding price term associated with each such quantity term, the price term representing at least one proposed price at which a trade represented by the at least one data processing segment is to be executed;

[0017] the at least one quantity term being greater than at least one quantity of the financial interest publicly offered at a price equivalent to the corresponding associated price term, in a market associated with the networked execution processor(s) to which the at least one data processing segment is to be routed.

[0018] Such quantity terms may, for example, be determined based at least in part on trading histories associated with the market(s) associated with the networked execution processor(s) to which the data processing segments are to be routed. They can be determined from data relating to displayed or non-displayed offers or trades, including, for example, historical non-displayed excess or reserve quantities.

[0019] In other aspects, the invention provides systems, methods, and programming or other machine-interpretable instructions for performing synchronized processing of data through multiple network computing resources, such systems comprising, for example, at least one processor configured to execute machine-interpretable instructions and cause the system to:

[0020] monitor the execution of signal processing execution requests by each of the plurality of network computing resources;

[0021] determine at least one timing parameter associated with a latency in the execution of signal processes between the system and each of the plurality of network computing resources; and

[0022] store the at least one timing parameter in machine-readable memory accessible by the at least one processor.

[0023] Monitoring of the execution of signal processing execution requests according to such embodiments, and others, of the invention can be implemented on a continuous, periodic, and/or other suitable or desirable basis.

[0024] In various embodiments of the various aspects of the invention, the network computing resources may include one or more exchange servers.
Data sources may include one or more broker or dealer systems or servers, the controlled signal processes may represent trades in financial interests, and the execution of signal processing execution requests may represent the execution of transactions in financial interests, including, for example, stocks, bonds, options and contract interests, currencies and/or other intangible interests, and/or commodities. In such embodiments, requests for the execution of data processing procedures can be wholly or partially based on parameters including, for example, any one or more of current market data quotes, order routing rules, order characteristics, the displayed liquidity of each network computing resource, and a probable delay, or latency, in the execution of an order quantity at each network computing resource.

[0025] In the same and other aspects, the invention provides systems for controlling or otherwise managing requests for data processing through distributed computing resources, such systems including one or more processors configured to execute instructions for causing the system to:

[0026] monitor the execution of signal processing execution requests by each of the plurality of network computing resources;

[0027] determine at least one timing parameter associated with a latency in the execution of signal processes between the system and each of the plurality of network computing resources; and

[0028] store the at least one timing parameter for each of the plurality of network computing resources.

[0029] Among the many advantages offered by the invention is the possibility of monitoring latencies and other factors in the networked processing of multi-part or other complex data processing requests on a dynamic, or "rolling," basis, and of using such dynamically monitored latencies and/or other factors in determining the timing parameters to be used in implementing synchronized processing requests, as disclosed herein. The timing parameters used in implementing synchronized processing requests can be monitored and/or determined on a continuous, periodic, or other basis, depending on the needs, objectives, and other factors of the applications in which they are to be applied.

[0030] A further advantage offered by the invention is the reduction or elimination of the need to consider one-way communication latencies, for example, the need to minimize latencies in communications between routing and execution processors.

[0031] As will be appreciated by those skilled in the relevant arts, once they have become familiar with this disclosure, the synchronization of execution of distributed data processing requests, for example by synchronized transmission of requests for such processing, has many possible applications in a large number of data processing fields.

BRIEF DESCRIPTION OF THE DRAWINGS

[0032] Reference will now be made to the drawings, which show, by way of example, embodiments of the present disclosure.

[0033] FIGS. 1A, 1B, and 3 show examples of systems suitable for processing data through multiple network computing resources according to various aspects of the invention.

[0034] FIGS. 2 and 4 show flowcharts illustrating examples of methods for performing data processing through multiple network computing resources according to various aspects of the invention.

[0035]
FIG. 5 shows an exemplary histogram that can be used in an exemplary method for managing data processing through multiple network computing resources according to various aspects of the invention.

[0036] FIGS. 6A and 6B show a comparison of fill ratios obtained using an exemplary method and system for data processing through multiple network computing resources versus those obtained using a conventional method and system.

[0037] FIG. 7 illustrates the use of an example metric for comparing results of an exemplary method and system for processing data across multiple network computing resources versus results of using a prior art method and system.

[0038] Throughout the appended drawings, like features are identified by like reference numerals.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

[0039] In this disclosure, as will be understood by those skilled in the relevant arts, "synchronized" means according to any desired timing sequence, whether regular, irregular, and/or wholly or partially simultaneous.

[0040] Figure 1 shows an example of a system 100 suitable for performing data processing through multiple network computing resources according to the invention.

[0041] In the example shown, system 100 includes one or more signal or data sources 102 (comprising one or more of each of the sources 102a, 102b), execution router processor(s) 104, and one or more network computing resources, or execution processors, 106. In some embodiments, data sources 102 may include one or more internal data sources 102a that can communicate directly with the router 104 (for example, over private local-area or wide-area network(s) or other secure wired or wireless communication, through direct communication channel(s), or through communication(s) within a single server). In the same and/or other embodiments, the data source(s) 102 can also include one or more external data sources 102b that can, for example, communicate with the router processor(s) 104 through one or more public networks 108 (for example, a public or private telecommunications network such as the internet), using suitable or otherwise desired network security measures, which may include, for example, data encryption, etc. In the example shown, router processor(s) 104 communicate with each of the one or more networked execution, or computing, resources 106 through a network 110, which may be the same as or different from the network(s) 108.

[0042] In various embodiments, the source(s) 102 may include devices that provide, on behalf of one or more entities that generate trading and/or other data processing requests, signals that communicate data and/or instructions related to the execution of data processing processes to the router processor(s) 104, which data and/or instructions the router processor(s) 104 can process (for example, aggregate by summing, weighting, etc., and/or divide into segments, etc.) and use as the basis for requests for data processing by the network computing resources 106. Data sources 102a, 102b may include, for example, systems, servers, processors, and/or any other suitable source(s) of requests for the execution of data processing tasks such as offers and/or bids for the purchase of commodities, intangible financial interests, etc., and/or other data processing tasks such as communications, word, image, and/or other document processing tasks. Each or any of the source(s) 102, processor(s) 104, and resources 106 may include several such systems, servers, or processors.
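As an illustration of the kind of signal set a source 102, 1102 might supply to the router, the following is a minimal, hypothetical sketch of a transaction request record; the field names mirror the data record described later in paragraphs [0060] and [0065], but the class name, field names, and example values are assumptions for illustration only and are not part of the disclosure.

```python
# Hypothetical sketch of a transaction request record as it might be supplied
# by a data source 102, 1102 to router processor(s) 104, 1104.
from dataclasses import dataclass

@dataclass
class TransactionRequest:
    source_id: str         # identifier of the originating system 102, 1102
    transaction_type: str  # e.g., "buy", "sell", "bid", "offer"
    interest_id: str       # e.g., an exchange symbol or CUSIP number
    quantity: int          # amount or volume, possibly including reserve quantity
    price_term: str        # proposed price term (kept symbolic, as in the text)

# Example mirroring the symbolic record of paragraph [0065].
request = TransactionRequest(
    source_id="trading-system-1102",
    transaction_type="sell",
    interest_id="CUSIP AA",
    quantity=77_000,
    price_term="A",
)
print(request)
```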
[0043] In various embodiments, some or all of the source(s) 102 and router processor(s) 104 can be combined, and/or otherwise configured to implement multiple programming or other machine-instruction applications running on single machines.

[0044] Network computing resources 106 may include any devices or other resources that communicate with the router processor(s) 104 to receive and carry out any of a very wide variety of data processing requests. Such network computing resources 106 may include systems, servers, processors, or any other suitable devices adapted to perform any process suitable for use in implementing the invention, including, for example, processing offers or bids for the purchase of commodities, financial interests, etc., and/or other data processing tasks such as word, document, or image processing, and/or other communication or documentation tasks.

[0045] In various embodiments, one or more data sources 102 transmit or otherwise supply to the router processor(s) 104 signals representing instructions, or requests, for performing data processing functions. Instructions from any given data source(s) 102 may include instructions for signal processes to be executed by any one or more of the network computing resources 106. The requested signal processes may include, for example, computing operations, data manipulation, and/or communications processes or other signal exchanges, among others. In some, but not necessarily all, examples, such instructions may specifically identify the network computing resource(s) 106 particularly targeted for execution of such processes.

[0046] Router processor(s) 104 can parse instruction signals received from one or more source(s) 102 and use such signals to prepare instructions, or requests, to be sent to the plurality of execution processors 106 for the execution of data processing and/or other signal processes according to the instructions received. Parsing of such instructions may include, for example, identifying the type of process(es) to be required, including, for example, the volume or quantity of an order or bid for a trade or an amount of document processing to be performed, and the type, nature, and/or identity(ies) of the network computing resource(s) 106 required to execute, and thus to be associated with, a given data processing and/or other signal processing request.

[0047] For example, to increase the efficiency of signal and/or other data processing functions, router processor(s) 104 can parse, sort, and aggregate instructions or requests received from multiple sources 102 for relatively small execution requests into one or more larger processing requests, and can further divide such aggregated request(s) into pluralities of smaller requests to be distributed to a plurality of execution processors 106, depending, for example, on the current ability of the execution processors 106 to satisfy or complete such requests.

[0048] For example, multiple sets of instruction signals received from different data sources 102a, 102b can be associated with (for example, directed for release and execution to) individual network computing resource(s) 106, and such instructions may be aggregated into requests for the execution of a single signal process by such network computing resource(s) 106. In some examples, the identification of the network computing resource(s) 106 to be tasked with a given signal processing request can be performed after aggregation.
For example, multiple instructions from different data sources 102a, 102b can be sorted or otherwise associated with a single signal or data process, such instructions can be aggregated, and the aggregated instructions can be associated with one or more identified network computing resource(s) 106, such that one or more signal process requests can accordingly be prepared for the identified network computing resource(s) 106. Such parsing, sorting, and/or identification can be performed according to predetermined rules or algorithms (for example, based on the continuing or current processing capabilities of one or more specific network computing resource(s) 106), and according to requirements encoded in the instructions or otherwise provided by the originating source(s) 102, where relevant.

[0049] As another example, single instruction sets for data processing can be divided by the processor(s) 104 and distributed to a plurality of resources 106 for distributed execution. For example, a relatively large order for trading in one or more financial interests originating from a single source 102a, 102b might need to be distributed to multiple exchange servers 106 in order to be completely filled; in such cases the request(s) from one or more source(s) 102 may be divided by the processor(s) 104 into appropriate orders for execution by a plurality of such resources 106.

[0050] Targeted, or specifically identified, network computing resources / execution processors 106 communicate with the router processor(s) 104 to receive the segmented requests for execution of signal processes and may execute them accordingly. Execution of such signal processes may include, for example, performing a text or image processing operation, a mathematical computation, or a communications signal exchange, among others.

[0051] As will be readily understood by those skilled in the relevant arts, various components of system 100 can be combined, or implemented in the form of separate systems or devices. In a wide variety of configurations, such combined or separate (sub)systems can be operated by the same or different entities. As a particular example, one or more request source(s) 102 can be integrated with, or otherwise associated with, individual router(s) 104.

[0052] An example of an application of a system 100 for distributed execution of segmented processing requests according to the invention is provided by a financial system 1000 adapted for processing requests for data processing representing trades and/or offers for trades, or other transactions, in tangible and/or intangible financial interests such as stocks, bonds, currencies (for example, foreign exchange), various forms of natural resources or commodities, options, loans, etc. As shown in Figures 1A and 1B, for example, in a financial transaction data processing system 1000 according to the invention, the signal or data source(s) 102 may include trading system(s) 1102, which may include, for example, trader/broker systems or servers as well as any other sources of bids, offers, or other transactions in financial interests, such as are currently provided by known financial trading platforms. In various embodiments, such trading systems 1102 may be referred to as order origination systems.

[0053] Order origination systems 1102, 102a may include systems operated by or on behalf of, for example, entities owned or otherwise controlled by parent or other controlling organizations such as banks or brokerage houses.
Order origination systems 1102, 102b may, for example, include systems operated by or on behalf of brokers or other trading entities acting, for example, on behalf of individual investors, trading through or with the assistance of independently controlled banks, institutional investors, and/or other brokerage houses.

[0054] Router processor(s) 104 in such embodiments may include, for example, server(s) or other system(s) 1104 that communicate with the trading systems 1102, 102, for example through the receipt and transmission of encrypted electronic signals representing requests for data processing representing the execution and/or acknowledgment of transactions in financial interests, and that communicate with broker, exchange, or other market systems or execution processor(s) 1106 for the execution of such transactions. In such embodiments a processor 104 may be referred to as a Smart Order Router or Tactical Hybrid Order Router (in either case, "SOR") 1104, 104. An SOR 1104 may, for example, include one or more gateway(s) 1122 and/or router(s) 1124 to facilitate communications by the router(s) 1104 with one or more trading systems 1102, 102, directly (for example, through wired communication, using one or more dedicated communication channel(s), or through communication within a single server) and/or indirectly (for example, through wireless communication, through a network 108, 1108, or through an intermediary server). Exchange or market systems 1106, or other execution processor(s) 106, may be in communication with the SOR(s) 1104 through, for example, a network 110, 1110, such as the internet or another public network, which may be the same as the network 1108.

[0055] For an embodiment of a system 100 configured as a financial trading or order execution system 1000, the requested and executed signal processes provided by the source(s) 102 may represent trades or other transactions in financial interests. Such transactions may include, for example, trades and/or offers for trades, or other transactions, in financial interests such as stocks, bonds, currencies (for example, foreign exchange), various forms of natural resources or commodities, options, loans, etc.; and the network computing resources 106 may be, for example, exchange servers 1106, examples of which may include automated or electronic market systems.

[0056] As will be well understood by those skilled in the relevant arts, an SOR (sub)system, or processor, 1104 receiving such transaction request signal sets can apply a wide variety of processes to the request(s). For example, where the signal sets represent requests for transactions in financial interests, the requested transactions can be aggregated, either over time and/or across multiple transaction request sources 1102; and/or requests for transactions in one or more interests can be divided for routing to multiple execution handlers or processors 1106, individually or in batches.

[0057] In various embodiments, as described herein, the order source(s) 102, 1102 can be implemented together with, or as part of, the order router(s) 104, 1104. It will be readily understood by those skilled in the relevant arts that any or all of the various components of system(s) 100, 1000, including, for example, any or all of the processors 102, 104, 106, and methods of operation thereof according to the disclosure herein, may be implemented using any devices, software, and/or firmware configured for the purposes disclosed herein.
A wide variety of hardware and software components, as well as firmware, are now known to be suitable, when used singly and/or in various combinations, for implementing such systems, devices, and methods; doubtless others will be developed hereafter.

[0058] Examples of components suitable for use in implementing examples of systems 100, 1000, and the various processes disclosed herein, including, for example, processes 200 of Figure 2 and 300 of Figure 4, include, for example, server-class systems such as the IBM x3850 M2™, the HP ProLiant DL380 G5™, the HP ProLiant DL585™, and the HP ProLiant DL585 G1™. A wide variety of other processors, including in some embodiments desktop, laptop, or palmtop systems, will also serve.

[0059] An example of a method 200 for processing a transaction request signal set generated by a transaction request signal source 102, 1102, suitable for implementation by router processor(s) 104 such as an SOR 1104 of a system 1000, is shown in Figure 2.

[0060] Process 200 of Figure 2 can be considered to start at 202, with receipt by the processor(s) 104, 1104 of signals representing a request for data processing such as, for example, a transaction in one or more financial interests. In embodiments of systems 100, 1000 comprising SOR routing processor(s) 1104 adapted to process signals representing requests for the execution of trades and/or other transactions in financial interests received from transaction signal source(s) 1102, the signal sets representing requests for the execution of transactions in one or more financial interests may include signals or signal sets representing, for example, one or more identifiers representing:
■ the source(s) of the request, such as a URL or other network address or identifier used by, or otherwise associated with, a trading system 102, 1102;
■ the interest(s) to be traded or otherwise transacted, such as an identifier used by one or more exchanges to identify a stock, a CUSIP number for a bond, a set of currencies to be exchanged, etc.;
■ a type of transaction (for example, buy, sell, bid, offer, etc.) to be executed or requested;
■ one or more quantities (that is, amounts or volumes) of the interest(s) to be transacted (including, for example, any total and/or reserve quantities); and
■ corresponding price terms.

[0061] Further parameters may include, for example, current and/or historical:
■ fill probability for multi-part or segmented transaction requests (that is, the historical proportion of multi-part orders that result in completed transactions);
■ amounts of difference (spread) between, for example, bid and ask prices, for example current spreads and/or historical trends in the spread;
■ market volatility in the specific interest to be traded, or in related or corresponding interest(s), or in related benchmarks or indices;
■ depth of the market book(s), for example current depth with respect to historical trends in depth;
■ reserve quantities;
■ display quantities; and
■ display size and backing, for example, on the buy and/or sell sides.

[0062] In other embodiments, such signal sets may comprise content and/or identifiers representing images, text, or other content to be processed by one or more execution processors 104, 1104, together with specific execution requests.

[0063] Among the many types of market systems 1106 suitable for use with various embodiments of the invention are alternative trading systems (ATSs) of the type known as "dark exchanges" or "dark pools".
Typically, such exchanges do not openly display market offers to members of the trading public. The use of known or predicted reserve quantities can be especially useful in such embodiments.

[0064] Thus an example of a data record to be provided by a source 102, 1102 to request a transaction in a given interest, on stated terms, can include:

[0065] <source (102, 1102) of the request> <transaction type> <interest identifier> <quantity(ies)> <price term(s)>

[0066] Signal sets received by processors 104, 1104 at 202 can be stored in any volatile and/or persistent memory(ies), as appropriate, for archival and/or other processing purposes.

[0067] At 204, the transaction or other data processing execution requests received at 202 can be parsed by the router processor(s) 104, 1104 to put them into any form suitable or desired for use in preparing one or more sets of instruction signals to be provided to the execution processor(s) 106, 1106. Parsing of the instruction signals may include, for example, identifying the type of transaction(s) or process(es) to be required, including, for example, the volumes and/or quantities of orders or offers for trades in specified interest(s) and whether such volumes are to be bought or sold, or offered for sale or purchase; the amounts and/or types of document processing to be done; and the type and nature of the network computing resource(s) or execution processor(s) 106 required to execute, and thus to be associated with, such execution or processing instructions. In various embodiments, the parsed instruction sets can be stored in temporary or volatile memory(ies) 118, 1018 accessible by the corresponding processor(s) 104, 1104 for aggregation with other processing requests, for division for routing to multiple execution processors/resources 106, 1106, and/or for preparation and routing of batch or other delayed execution requests.

[0068] The instructions received at 202 can be accumulated during defined time intervals, regular or irregular, such as the duration of a trading day or any segment thereof, or any other desired time period(s), which can be preset and/or determined dynamically by the processor(s) 104, 1104. Instructions can also be processed individually, as received. If further instructions are to be received before processing, or could potentially be received, process 200 can return to 202.

[0069] Transaction requests/instructions can be accumulated during defined time intervals, such as the duration of a trading day or any segment thereof, or another desired period of time, which can be preset and/or determined dynamically by the processor(s) 104, 1104. If further instructions are received, or potentially can be received, process 200 can return to 202.

[0070] In embodiments of the invention that employ classification/aggregation techniques in parsing or otherwise preparing orders or other processing requests, at 206 the processor(s) 104, 1104 can repeat processes 202 - 204 until all necessary or desired sets of related or aggregatable processing request signals have been received from the source(s) 102, 1102.
For example, as described above, arbitrary numbers of data records representing orders or requests for the purchase of bonds identifiable by CUSIP (Committee on Uniform Security Identification Procedures) numbers can be received from source(s) 102, 1102 and stored in memory 118, 1018 associated with the processor(s) 104, 1104 for batch processing, as follows:

[0071] Upon individual receipt, or at a given periodic rate, at a given time, when a given number of orders has been received, when all desired orders have been received, or when any other desired criteria have been met, the processor(s) 104, 1104 can, as part of parsing or otherwise processing instructions at 204, sort and/or group the stored records according to any one or more desired criteria, for example by transaction request type and interest identifier, as follows:

[0072] As shown, various data fields in the transaction request records can be reordered or otherwise reformatted as necessary or desired to suit the processing needs of the routing processor(s) 104, 1104. For example, as shown, a data item representing the associated "source" of a request can be accorded a different priority, to facilitate efficient sorting while still enabling the processor(s) 104, 1104 to report fulfillment of transactions/requests upon completion of order processing.

[0073] Process 204 may also include aggregation, by the processor(s) 104, 1104, of the received and sorted transaction requests into collected or consolidated order(s) for specific types of transactions in specific interest(s), for example by summing the total or subtotal quantities associated with the corresponding transaction requests, as follows:

[0074] When all desired signal sets have been received at 202, and optionally sorted, accumulated, and/or otherwise processed at 204, at 208 the processor(s) 104, 1104, using the instruction sets processed at 204, can prepare execution request signal sets for transmission to the execution resources/processors 106, 1106. Such execution request signal sets can comprise any signals necessary or desirable to effect the requested processing, including content, data, and command signals. For example, in embodiments of the invention adapted to the processing of requests for transactions in financial interests, orders can be sorted and/or aggregated on the basis of the interest(s) to be traded, the quantities of the interest(s) to be traded, price, etc., and associated with appropriate execution command signals. The form of any execution command signals associated with a given request may depend, as those skilled in the relevant arts will recognize, on the nature and type of the requests to be executed and on the processors 106, 1106 by which they are to be executed, as well as on any networks 110, 1110 over which signals exchanged between the processor(s) 104, 1104 and 106, 1106 are to be sent, including applicable protocols and instruction formatting requirements. Therefore, data pertaining to any or all of the systems 106, 1106, 104, 1104, and 110, 1110, the protocols used by them, and/or information related to the interests traded, offered, or described can be accessed and used by the processor(s) 104, 1104 in parsing and preparing instructions for the execution of processing by any of the processors or resources 106, 1106.
Sources 1126 of such data may include, for example, exchange market data system(s) 1126v (Figure 1B) which, for example, in embodiments of the invention adapted for the processing of financial transactions, may include information received from various exchange systems 1106, from news information sources such as Bloomberg or Reuters, and/or from other sources.

[0075] In configuring requests for data processing using networked processing resources, including many resources configured for use in executing financial transactions, it is sometimes necessary or desirable to break execution requests and/or other multi-part processing requests into parts. Such parts, or segments, may, for example, correspond to portions of larger orders or other data processing requests to be executed by a plurality of networked resources 106 such as exchange servers or other execution or handling processors 1106. For example, if a plurality of exchange servers or other markets are available for the execution of a transaction request representing a purchase order for a significant quantity of a financial interest such as a stock or bond, it may be necessary or desirable to divide the order into multiple parts, for execution in multiple markets and/or by multiple exchange servers 1106. For example, sufficient quantities of the specific interest may not be available, at all or at desirable prices, on a single exchange: in order to fill an order completely, it may be necessary or desirable to break a single order into smaller segments and route them to multiple exchanges.

[0076] Thus, for example, in various embodiments of the invention directed to the processing of requests for transactions in financial instruments, when a router 104, 1104 is requested by one or more sources 102, 1102 to complete a transaction in one or more financial interests, the router 104, 1104 can, in preparing the signal set(s) representing requests for the transactions, access information available from sources such as market data source(s) 1126, as well as from any one or more of the execution processor(s) 106, 1106, to determine the quantities of such interests available through the respective processors 106, 1106 and the terms on which such quantities are available, and can construct an execution request signal set configured for routing to each of the respective desired processors 106, 1106, based on the quantities available on the most favorable terms.

[0077] For example, continuing the example above, it may be necessary or desirable to divide one or more incoming processing requests into smaller parts, directed to a plurality of exchanges, in order to obtain fulfillment of the complete order(s). This can be accomplished, for example, by accessing data representing the current order books provided by one or more of the exchange servers 1106 and dividing the order(s) accordingly, using known data processing techniques. Thus, for example, the aggregated order 'sell No.
CUSIP AA' above can be divided into portions or segments, and the data representing such segments can be associated with URLs or other network resource address identifiers suitable for use in routing the various segments to a plurality of exchange servers A1 - C3, as desired, for example as follows:

[0078] As will be appreciated by those skilled in the relevant arts, execution of individual portions of a distributed transaction or other multi-part data processing request, such as a transaction in a financial interest placed on multiple exchanges, by a plurality of networked resources such as market or exchange servers 1106 or other execution processors 106, typically requires differing amounts of time. That is, if multiple parts of a desired transaction execution request are sent simultaneously to a plurality of exchange execution processors 106, 1106, each part or segment of the transaction request can be expected to execute at a different point in time. This is because the amount of time, or "latency," required to transmit the execution request signals from the order router(s) 104, 1104 to the various resources or execution processors 106, 1106 across a network 110, 1110 or other communications path; for actual processing of the corresponding portions of the execution request by the corresponding processors 106, 1106; and/or for return of confirmatory or other data to the order router(s) 104, 1104, typically varies depending on a number of factors, including, for example, the network paths between the router(s) 104, 1104 and the execution processors 106, 1106; the amount of network traffic being processed by the network(s) 110, 1110; the number of requests being handled by the individual execution processors 106, 1106; etc.

[0079] For a number of reasons it can be important, in such cases, to synchronize the execution of two or more portions of a multi-part execution request. As an example, when an execution request represents a request for the execution of multiple parts of a financial transaction in multiple markets or on multiple exchanges, unsynchronized, staggered execution of the individual portions of the transaction by multiple corresponding servers can affect both the possibility of completing the later portions of the transaction and/or the terms on which such later portions can be completed.

[0080] A particular example of the desirability of synchronizing execution requests can be illustrated with reference to Figure 3. In the example shown in Figure 3, system 100, 1000 comprises an order router 104, 1104 and a plurality of networked execution resources 106, exchange servers or execution processors 1106 "Exchange 1", "Exchange 2", and "Exchange 3". In addition, system 100, 1000 of Figure 3 further comprises a co-located trading server 304 configured to execute trades or other transactions on execution resource 1106 "Exchange 1". As noted in the Figure, the co-located trading server 304, which employs a relatively low-latency trading algorithm, is associated with Exchange 1 in such a way that it can execute transactions with Exchange 1 in a relatively short period of time compared to the amount of time required for other processors, such as router(s) 104, 1104, to complete similar transactions with Exchange 1. For example, the co-located server 304 can be communicatively linked with Exchange 1 by a direct wired connection or other rapid processing system.
Moreover, Exchange 1 is able to complete an execution request with the non-co-located processor(s) 104, 1104 in a relatively shorter period of time (that is, with a "lower latency") than can either Exchange 2 or Exchange 3. In other words, as shown in Figure 3, for the latencies, Time X < Time Y and Time X < Time Z, while the execution time for a transaction between the co-located server 304 and Exchange 1 is less than any of Time X, Time Y, and Time Z.

[0081] If, for example, signals representing a request to trade in one or more financial interests are received by a router processor 104, 1104 from one or more request sources 102, 1102, and the request is of such quantity or magnitude that an order reflecting the request will be too large to be filled completely by any one of Exchanges 1, 2, or 3, the order router 104, 1104 can attempt to check the availability at the various available processors 106, 1106 and divide the order accordingly, in order to route a portion of it to each of Exchange 1, Exchange 2, and Exchange 3. If the router 104, 1104 of Figure 3 simultaneously transmits to each of the execution processors 106, 1106 Exchange 1, Exchange 2, and Exchange 3 a divided portion or segment of the request to execute the requested transaction, it is possible that the trading server 304 (which may, for example, be operated by a high-frequency trading entity or other speculative investor) will be able to fill a portion of that transaction on Exchange 1, for example by acting as a counterparty to the proposed transaction by selling or buying all or a portion of the transaction request sent to that exchange by the order router 104, on the terms stated in the transaction request, and will then have time to change or otherwise adjust the terms on which it is willing to fill the remaining portions of the order on Exchange 2 and/or Exchange 3, on terms more favorable to itself (for example, the party operating or acting through server 304) than those the parties originally offering such transactions (for example, those behind the orders provided by the request processor(s) 104, 1104) might otherwise have obtained. In other words, for example, the co-located trading server 304 may, owing to the difference in execution latencies associated with trading on Exchange 1, Exchange 2, and Exchange 3, be able to fill a portion of the requested transaction on Exchange 1 and then move to improve its own terms, for example by raising or lowering its bid/offer price, for filling the remaining portions of the transaction on Exchange 2 or Exchange 3 before such remaining portions can execute at the previously stated prices, in order to increase the profits of its own operators or beneficiary(ies), or the profits of other traders offering the same interests on those exchanges.

[0082] As can be seen in Figure 3, such possibilities (which can be referred to as "latency arbitrage" opportunities) can exist when:

Time X + Time A < Time Y and/or Time X + Time B < Time Z

[0083] It will be appreciated by those skilled in the relevant arts that, even where the transaction or other processing request signals are sent simultaneously to each of Exchanges 1, 2, 3 by the router(s) 104, 1104, the time required for each divided portion of the request to be received, acknowledged, and/or processed by the respective resources 106, 1106 (for example, Times X, Y, Z) may in general differ, for example due to differences in network communication paths and processing speeds at any or all of the processor(s) 104, 1104 and/or 106, 1106.
Similarly, the time required for the trading server 304 to change the terms of its transaction offers on each of Exchanges 2 and 3 may in general differ.

[0084] Among the disadvantages that can arise in such cases is that traders represented by the request source(s) 102, 1102 may pay higher prices in executing their trades than they otherwise would have, in the absence of such arbitrage opportunities; or, if prices on the subsequent exchanges change sufficiently to place them outside the terms stated in their execution requests, they may be unable to complete transactions in the desired quantities - for example, all or part of a transaction routed to an exchange processor 1106 may not trade because of an altered price.

[0085] In such cases, where a trading instruction may not be fully filled on an exchange server 1106 due, for example, to price or other term manipulation by a third party taking advantage of latencies, in proceeding with data processing requests on one or more exchange servers 1106 it can be useful to time or schedule the sending of trade requests to multiple exchange servers 1106 so that execution of such trade requests at all of the exchange servers 1106 occurs in a synchronized manner, such as, for example, in a substantially concurrent manner. In particular, it can be useful to synchronize the execution of signal processing execution requests, or portions or segments thereof, across multiple network computing resources 106, 1106, for example so that the signal processes are received, acknowledged, and/or executed by the resources 106, 1106 in a substantially concurrent manner.

[0086] In some examples it may not be necessary for the signal processes to be executed on each processor 106, 1106 simultaneously; it may be sufficient that:

Time Y - Time X < Time A, and/or Time Z - Time X < Time B,

[0087] such that execution of the request(s), or segments thereof, occurs before any change in terms can be implemented by a trading server 304. The use of such synchronized timings can, for example, cause:

Time X + Time A > Time Y and/or Time X + Time B > Time Z

[0088] and thus, for example, defeat latency arbitrage opportunities. In some embodiments, therefore, the invention provides the router(s) 104, 1104 with the ability to execute transactions across multiple resources 106, 1106 with minimal or no time variance, so that algorithms run by trader(s) 304 employing low-latency algorithms are given insufficient time to react to market changes.

[0089] Thus, in these and other cases where synchronization is desired, at 210 the processor/router 104, 1104 can determine absolute or relative timings to be assigned to, or otherwise associated with, the various portions or segments of an execution request in order to obtain the desired sequencing. Such timings can be determined to effect any desired synchronization: for example, timings configured to effect simultaneous, or substantially simultaneous, execution can be determined, or timings configured to effect any other desired sequencing can be determined.

[0090] Thus at 210, a timing parameter can be determined for each signal processing execution request, or portion thereof, to be assigned to each respective network computing resource 106, 1106. The parameters are determined in such a way as to cause synchronized execution of the signal processing execution requests at each of the respective network computing resources 106, 1106.
This determination can be based at least in part on a corresponding determined latency in the execution time of such request(s) and/or portion(s), such as, for example, any or all of latencies A, B, X, Y, Z of Figure 3, and/or any other relevant latencies, in the execution of signal exchanges between the router processor(s) 104, 1104 and each of the network computing resources 106, 1106, or in the processing of other such signals by any of such devices.

[0091] Arbitrage and other problems caused by variations in execution time between servers can also be minimized or eliminated by reducing the absolute latencies in the transmission and execution of processing requests. Thus, the determination of timing parameters as described above can be practiced in combination with procedures that also serve to minimize the absolute amounts of time associated with the execution and/or reporting of execution requests by the resource(s) 106, 1106.

[0092] Information on the specific latencies used in determining the timing parameters to be associated with the various portions of a multi-part execution request provided by the router(s) 104, 1104 to a plurality of execution processors 106, 1106 can include timing information (for example, transmission delays, signal propagation delays, serialization delays, queuing delays, and/or other processing delays at the router processor(s) 104, 1104, at the network computing resources 106, 1106, and/or in the network(s) 110, 1110, 108, 1108). Such information can be provided by or received from any source(s), and can be stored in and retrieved from one or more timing data stores 214. Timing data store(s) 214, in various embodiments, may include databases or other data structures residing in memory(ies) 118, 1018 associated with, or otherwise accessible by, the router processor(s) 104, 1104. For example, if the execution of a portion of an execution request associated with a first network computing resource 106, 1106 has a determined latency longer than that associated with a second network computing resource 106, 1106 (as, for example, in the case of Exchange 1 versus Exchanges 2 and 3 of Figure 3), the timing of the portions of a transaction request to be routed to these two network computing resources 106, 1106 can be determined such that the execution request, or portion thereof, associated with the first network computing resource 106 is timed to be sent earlier than the request associated with the second network computing resource 106, with the goal of having the requests executed substantially simultaneously at the two network computing resources 106, or within an effective minimum time A or B associated with possible term manipulation by a trading server 304.

[0093] In some embodiments, one or more algorithms, which may, for example, use a latency probability model or other predictive model, can be used in determining the timing parameters to be associated with the portions of execution requests to be routed to multiple execution processors 106, 1106, based on information associated with such communication and/or processing delays, or latencies. For example, a rolling average of accumulated historical latency data, for any desired device(s), time period(s), or other timing considerations, can be used to predict an expected latency for the execution of a data processing request.
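As a concrete illustration of paragraphs [0092] - [0093], the sketch below keeps a rolling window of observed per-venue latencies, predicts an expected latency as a rolling average, and derives relative send delays so that segments routed to faster venues are held back. The window length, class name, and method names are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative sketch: rolling-average latency prediction (paragraph [0093])
# and relative send delays derived from it (paragraph [0092]).
from collections import defaultdict, deque

class LatencyTracker:
    def __init__(self, window: int = 100):
        # One rolling window of observed latencies (seconds) per venue.
        self._samples: dict[str, deque[float]] = defaultdict(lambda: deque(maxlen=window))

    def record(self, venue: str, observed_latency_s: float) -> None:
        """Store a newly observed latency for a venue (cf. timing data store 214)."""
        self._samples[venue].append(observed_latency_s)

    def predicted(self, venue: str) -> float:
        """Rolling average used as the expected latency for the next request."""
        samples = self._samples[venue]
        return sum(samples) / len(samples) if samples else 0.0

    def send_delays(self, venues: list[str]) -> dict[str, float]:
        """Relative delays: the slowest venue is sent to immediately; faster
        venues are delayed so that arrivals are approximately simultaneous."""
        predictions = {v: self.predicted(v) for v in venues}
        slowest = max(predictions.values())
        return {v: slowest - p for v, p in predictions.items()}

# Example with assumed observations (seconds).
tracker = LatencyTracker(window=50)
for obs in (0.0021, 0.0019, 0.0020):
    tracker.record("Exchange 1", obs)
for obs in (0.0052, 0.0049):
    tracker.record("Exchange 2", obs)
print(tracker.send_delays(["Exchange 1", "Exchange 2"]))
```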
[0094] An example of an algorithm suitable for determining the timing parameters to be associated by the router(s) 104, 1104 with the portions of execution requests provided by the source(s) 102, 1102, where it is desired to effect concurrent or otherwise synchronized arrival of such portions or requests at the network resources 106, 1106, is based on an average latency between transmission of a request from the router(s) 104, 1104 and an appropriate timing reference. Such timing reference(s) can, for example, include the start of processing by the corresponding target resource(s) 106, 1106, and/or receipt by the routing processor(s) 104, 1104 of a confirmation signal generated by the resource(s) 106, 1106 upon receipt of the request and/or completion of execution of the request. For example, in some embodiments, it can be advantageous to measure latencies between transmission to a given resource 106, 1106 and receipt by the router(s) 104, 1104 of a confirmation or acknowledgment signal, or other appropriate response signal 1260, from such resource 106, 1106, and to use such measured latency(ies) in determining the timing parameter(s) at 210.

[0095] Process step 210 can, for example, be performed by an application, or module, executed by or otherwise associated with the routing processor(s) 104, 1104, such as a capital management entity or module 1126 in the case of a financial system 1000. Determination of a timing parameter to be associated with each part or segment of a multi-part execution request can, for example, include the use of an adaptive exchange round-trip latency (RTL) learning & compensation logic module 1126c, as shown in FIG. 1B. Referring to Figure 3, such an adaptive exchange RTL learning & compensation logic module 1126c can determine the timing for each signal processing request (for example, a trade request) as follows:
1) For each portion or segment n of an m-part multi-part processing request X, a time T1x,n provided, for example, by a clock associated with the processor(s) 104, 1104 is time-stamped by the processor(s) 104, 1104 at a desired defined point within the process of parsing or generating the transaction order(s), or other processing request(s) X, and is associated with the processing request signal set record(s) corresponding to each part or segment n of the m-part request X.
2) T2x,n for each portion n of the m-part request X is time-stamped by the processor(s) 104, 1104 when the nth portion request signal set has been received at the target exchange 106, 1106 and a corresponding confirmation message generated by that exchange has been received by the request routing processor 104, 1104.
3) Over the course of a trading day (or other data processing period), process steps 2 and 3 can be repeated, and the corresponding T1x,n and T2x,n determined for each transaction segment routed to a given execution processor 106, 1106.
4) For each portion segment n of a subsequent pending multi-part execution request Y, the corresponding timing parameter RTLy,n = Σ (T2x,n - T1x,n) / Z, where Z is the number of previously executed order segments routed to a given execution processor 106, 1106 used in the calculation.
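A minimal sketch of the round-trip latency (RTL) bookkeeping described in steps 1) - 4) above, under the assumption that T1 and T2 are taken from a single monotonic clock at the router; the class and method names are illustrative, not from the disclosure.

```python
# Illustrative sketch of the adaptive exchange RTL measurement in steps 1)-4):
# T1 is stamped when a segment is generated, T2 when the venue's confirmation
# arrives; the RTL for the next request is the mean of the last Z differences.
import time
from collections import defaultdict, deque

class RTLMeasurement:
    def __init__(self, max_samples: int = 100):
        self._t1: dict[tuple[str, str], float] = {}   # (request_id, venue) -> T1
        self._rtl: dict[str, deque[float]] = defaultdict(lambda: deque(maxlen=max_samples))

    def stamp_t1(self, request_id: str, venue: str) -> None:
        """Step 1: time-stamp segment n of request X when it is generated."""
        self._t1[(request_id, venue)] = time.monotonic()

    def stamp_t2(self, request_id: str, venue: str) -> None:
        """Step 2: time-stamp when the venue's confirmation is received."""
        t1 = self._t1.pop((request_id, venue), None)
        if t1 is not None:
            self._rtl[venue].append(time.monotonic() - t1)  # T2 - T1

    def rtl(self, venue: str) -> float:
        """Step 4: RTLy,n = sum(T2 - T1) / Z over the last Z observations."""
        samples = self._rtl[venue]
        return sum(samples) / len(samples) if samples else 0.0
```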
[0096] Where the timing data store(s) 214 store a rolling record of past timing parameters (for example, a plurality of previously determined timing parameters RTLy,n) associated with one or more execution resources 106 / exchange servers 1106, such data can be used to build a rolling histogram, which in turn can be used to predict the current or cumulative latency for each resource 106 / exchange server 1106. Because such predictions are based on a continuously changing ("rolling") record, this process can be referred to as "online learning". There may be a component (for example, an exchange latency histogram memory or processing component, not shown) within the adaptive exchange RTL learning & compensation logic module 1126c responsible for this.

[0097] An adaptive exchange RTL learning & compensation logic module 1126c can use the predicted latencies to determine appropriate timing parameters to be used in transmitting trade (or other data processing) requests to multiple exchange servers 1106, in order to compensate for differences in execution latencies associated with such exchange servers 1106, in a way that reduces, controls, minimizes, or eliminates differences in the execution timing of portions of divided trade requests routed to different exchange servers 1106, and thereby, for example, reduces or eliminates opportunities for latency arbitrage by opportunistic traders.

[0098] Adaptive RTL module(s) 1126c can use a variety of algorithms in determining timing parameters suitable for synchronizing execution of multi-part processing requests. For example, such a module can use the latency values determined for the various exchanges to determine the extent to which the router(s) 104, 1104 should compensate for the different exchange latencies by sending the corresponding portions of a processing request to the various processors 106, 1106 at, for example, different times. This can minimize the delay between completion of execution of each portion, for example by minimizing the difference in time between receipt of each respective portion by its corresponding execution resource 106, 1106. (In Figure 3, for example, this would be shown by minimizing the differences between the times elapsed at Time X, Time Y, and Time Z.) Such algorithms can also take into account historical differences in the time required to execute trade or other processing orders at the various resources 106, 1106, in addition to communication delays.

[0099] Adaptive exchange RTL learning & compensation logic module(s) 1126c can additionally collect information about the market conditions prevailing at each exchange server 1106 (using, for example, data sources such as exchange market data source(s) 1126v), waves of orders and executions, current latencies, and target latencies (for example, as predicted above) at the time trade requests are submitted. There may be a component within the adaptive exchange RTL learning & compensation logic module 1126c responsible for this.
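The compensation described in paragraphs [0097] and [0098] reduces to a simple rule: hold each portion back by the difference between the slowest venue's predicted latency and that venue's own predicted latency. A minimal sketch, assuming Python; `send_segment` is a hypothetical caller-supplied transmit function, and one timer per send is used here only for brevity.

```python
import threading

def compute_send_offsets(predicted_rtl: dict) -> dict:
    """Timing parameters: delay each venue by (slowest RTL - its own RTL) so that
    all portions are expected to arrive at their venues at about the same time."""
    slowest = max(predicted_rtl.values())
    return {venue: slowest - rtl for venue, rtl in predicted_rtl.items()}

def dispatch_synchronized(segments: dict, predicted_rtl: dict, send_segment) -> None:
    """Send each venue's segment after its computed offset.

    `segments` maps venue -> order segment; `send_segment(venue, segment)` transmits it."""
    offsets = compute_send_offsets(predicted_rtl)
    for venue, segment in segments.items():
        threading.Timer(offsets[venue], send_segment, args=(venue, segment)).start()

# Example: the slowest venue (EX1) is sent immediately; the faster ones are held back.
print(compute_send_offsets({"EX1": 0.0150, "EX2": 0.0032, "EX3": 0.0041}))
# -> EX1: 0.0, EX2: ~0.0118, EX3: ~0.0109
```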
[00100] One or more timing parameters associated with execution requests to be routed to any one or more of the execution processor(s) 106, 1106 can also be provided to the corresponding routing processor(s) 104, 1104 (for example, from timing data store 214), or determined by such processor(s) 104, 1104 using related data provided by any one or more market data feed(s) or processor(s) 1126 (including, for example, any one or more of the processors or (sub)systems 1126a-1126g and/or 1126v), and/or by the processor(s) 106, 1106 themselves.

[00101] At 212, the various portions of the optionally aggregated and divided signal processing execution request(s) are sent to the respective network computing resources 106 in accordance with the timing parameters or sequence(s) determined or otherwise acquired at 210. Thereafter, the request(s), or the various portions thereof, can be executed by the respective execution resources 106, 1106, with further communications and signal processing as necessary or desired. As will be understood by those skilled in the relevant arts, once familiar with this disclosure, once the parameters of a desired execution request have been determined by router(s) 104, 1104, signals representing those parameters can be assembled using known or specialized data processing techniques; formatted in accordance with the Financial Information eXchange (FIX) protocol and/or any other desired protocol(s); and transmitted, written, or otherwise communicated to the corresponding execution processor(s) 106, 1106 using known or specialized signal communication techniques, and executed in accordance with the requested transaction or other data processes.

[00102] For example, continuing the example above, timing delays, or parameters X', Y', Z', any or all of which may be equal to zero or to any other suitable time period, can be determined in accordance with the disclosure above and associated with the order segments generated by processor(s) 1104 for the purchase of 77,000 lots of CUSIP No. AA bonds at price A, with 25,000 lots (18,000 + 7,000) in reserve at prices D and E, respectively.

[00103] Thereafter, the routing processor(s) 104, 1104 can process the transaction segments using the timing parameters, for example delays X', Y', Z', to cause the corresponding transaction segments to be transmitted or otherwise provided to exchanges 106, 1106 A1, B2, C3 for execution according to a desired timing sequence, for simultaneous or otherwise desired sequential execution.

[00104] Following execution of all, or as many as possible, of the routed transaction or processing segments, the routing processor(s) 104, 1104 can receive from the corresponding execution processor(s) 106, 1106 data confirming or otherwise indicating such execution, and, by accessing data records stored in the associated memory(ies), can allocate execution results to the source(s) of the request 102, 1102.
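To illustrate paragraphs [00101]-[00103], the following sketch renders each order segment as a simplified FIX-style tag=value string and hands it to the timed dispatcher sketched after paragraph [0099]. The tag subset, the segment sizes (chosen arbitrarily so that they sum to the 77,000 lots of the example), and the placeholder price are illustrative assumptions only; a real FIX session also requires header, sequencing, and checksum handling.

```python
SOH = "\x01"  # FIX field delimiter

def render_new_order(symbol: str, side: str, qty: int, price: float) -> str:
    """Very reduced FIX-style NewOrderSingle body (35=D); header/trailer omitted."""
    fields = [
        ("35", "D"),                            # MsgType = NewOrderSingle
        ("55", symbol),                         # Symbol
        ("54", "1" if side == "buy" else "2"),  # Side: 1 = Buy, 2 = Sell
        ("38", str(qty)),                       # OrderQty
        ("40", "2"),                            # OrdType = Limit
        ("44", f"{price:.2f}"),                 # Price
    ]
    return SOH.join(f"{tag}={value}" for tag, value in fields) + SOH

# One segment per venue, each to be sent with its own timing parameter (delay X', Y', Z').
segments = {
    "EX_A": render_new_order("AA", "buy", 30_000, 100.00),
    "EX_B": render_new_order("AA", "buy", 25_000, 100.00),
    "EX_C": render_new_order("AA", "buy", 22_000, 100.00),
}
# dispatch_synchronized(segments, predicted_rtl, send_segment) would then transmit each
# message at its computed offset, as sketched after paragraph [0099].
```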
[00105] Reference is now made to FIG. 4, which shows an example of a method 300 for determining timing parameters to be used in managing data processing by multiple network computing resources 106. In the embodiment shown, method 300 is an iterative method, and each iteration of method 300 is denoted N. Method 300 is suitable for implementation using, for example, any of the various embodiments of systems 100, 1000 and components thereof, including particularly the router processor(s) 104, 1104 and the data source(s) 1126.

[00106] At 302, each of a plurality of network computing resources 106, 1106 is monitored, for example by router processor(s) 104, 1104, execution processor(s) 106, 1106, external processor(s) 1126, and/or various components or modules operated by or otherwise associated with them, for latencies associated with receipt and/or execution of signal processing execution requests. This can be accomplished, for example, by a monitoring module (for example, an exchange RTL measurement module 1126b, such as for financial system 1000) of the router processor(s) 104, 1104. Such monitoring may comprise, for example, time-stamping outgoing requests for data processing, and comparing the time(s) of receipt of confirmation(s) or processing results against the corresponding time-stamped outgoing request. The time difference between the outgoing request and the incoming receipt confirmation and/or data processing results can be defined as a data or signal processing latency, and stored in memory accessible by the router processor(s) 104, 1104. By timing the differences between outgoing requests and incoming receipts, acknowledgments, and/or results, such latencies can be monitored on a continual, periodic, and/or other dynamic basis.

[00107] At 306, at least one timing parameter associated with the latency(ies) observed in execution of signal processing requests provided to the monitored resources 106, 1106 by the routing processor(s) 104, 1104 is determined. As described herein, such timing parameter(s) may include, for example, latencies due to communication delay, such as transmission delays or other signal propagation delays, and/or processing delays, among others. Typically, corresponding timing parameter(s) are determined for each of the plurality of network computing resources 106, 1106 to which a transaction order or other data processing request, or a portion thereof, is expected to be sent by routing processor(s) 104, 1104.

[00108] In various embodiments, such as in various forms of the financial systems 1000, and depending on the type(s) of system(s) used and the desired processing results, such timing parameters can be determined for unidirectional and/or bidirectional communications between the routing processor(s) 1104 operated by or on behalf of a capital management entity and an exchange server 1106; that is, from generation of a multi-part transaction request by the capital management entity's routing processor 1104 to receipt of a response, such as confirmation of receipt of a part of a larger trade order and/or confirmation of execution of all or part of a requested trade, from the execution resource to which the processing request was directed. With reference to FIG. 1B, for example, and as explained above, an RTL measurement may include latencies due to any or all of: signal transmission within the capital management entity server 1104, signal processing within the capital management entity 1104, transmission of signals between the capital management entity 1104 and a network 1110, transmission of signals within the network 1110, transmission of signals between the network 1110 and the targeted exchange server 1106, and processing of signals within the exchange server 1106; for both communications sent from the routing processor(s) 104, 1104 and responses (for example, acknowledgment of communication, rejection of a trade request, confirmation of a trade request, etc.) sent from the exchange server 106, 1106. In such embodiments, the timing parameter(s) can simply be the total time for the request-and-response communication, or a statistical or other mathematical function thereof.

[00109] For example, an exchange RTL measurement module 1126b, such as that associated with SOR 1104 shown in FIG. 1B, can determine a timing parameter as follows:

1) A time-stamp value T1 is associated by processor(s) 1104 with a new communication M1 (for example, a trade request) sent to an exchange server 1106.

2) A time-stamp value T2 is associated by processor(s) 1104 with any response to the request M1 received from the exchange processor 1106 to which the request M1 was sent. This response can be any response, such as an acknowledgment, a rejection, a total or partial fill, etc., and may depend on the nature of the request represented by M1.

3) The RTL associated with the request M1 is calculated as the difference between T2 and T1. In some embodiments, as noted above, RTL can be calculated as an average of the times (T2 − T1) for a past number Z (for example, 30) of processing requests routed to each of a plurality of targeted exchange processor(s) 1106.

[00110] At 308, the timing parameter(s) associated with each network computing resource 106 can be stored in the timing data store(s) 214. As described herein, a timing data store 214 may, in some examples, be a database or other data structure residing in a memory associated with, or otherwise accessible by, router processor(s) 104. Timing parameter(s) stored in timing data store(s) 214 can be used in processes such as those described above with respect to process block 210 of Figure 2.

[00111] Timing parameter(s) determined by the processor(s) 104, 1104 can, for example, be represented as rolling histogram(s) of the latencies associated with individual execution processors 106, 1106 and/or other components of the system(s) 100, 1000.

[00112] FIG. 5 shows an example of a histogram illustrating stored data representing communication and/or other processing latency values associated with an execution processor 106, 1106 in a system 100, 1000. In the example shown, response latency times (in ms) are stored for the most recent 30 transaction requests or other communications with a given execution server 106. Although the example shows 30 latency times being stored, the number of stored timing values used to determine RTLs or other timing parameters may be greater or smaller, and may vary according to conditions such as time of day, season, etc. Calculation results based on the stored latencies, and other related data, can also be stored in timing data store(s) 214. For example, in the example of FIG. 5, in addition to the raw latency times, a rolling average or a rolling mode of the 30 (or other suitable number of) past latency times associated with communications and/or other processing with or by each execution server 106 can also be calculated and stored in timing data store(s) 214.
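The rolling histogram of paragraphs [00111]-[00112] can be kept as simple binned counts over the most recent 30 observations. A minimal sketch, assuming Python, follows; the bin width, class name, and method names are chosen here purely for illustration.

```python
from collections import Counter, deque

class RollingLatencyHistogram:
    """Histogram over the most recent `window` latency samples for one venue."""

    def __init__(self, window: int = 30, bin_ms: float = 1.0):
        self.bin_ms = bin_ms
        self._samples = deque(maxlen=window)   # most recent latencies, in ms

    def observe(self, latency_ms: float) -> None:
        self._samples.append(latency_ms)

    def histogram(self) -> Counter:
        # Bucket each sample into a bin of width bin_ms.
        return Counter(int(x // self.bin_ms) for x in self._samples)

    def rolling_mean(self) -> float:
        return sum(self._samples) / len(self._samples) if self._samples else 0.0

    def rolling_mode_ms(self) -> float:
        # Centre of the most populated bin; one way to read the "rolling mode" above.
        if not self._samples:
            return 0.0
        bin_index, _ = self.histogram().most_common(1)[0]
        return (bin_index + 0.5) * self.bin_ms
```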
[00113] As will be readily understood by those skilled in the relevant arts, other factors, including for example desired fixed offsets or delays, scheduling factors associated with time of day, day of week, season, etc., known trading or other data processing patterns, economic conditions, etc., can be used at 210 in determining the timing parameters.

[00114] The timing parameters determined at 210 can be used by routing processor(s) 104, 1104 to synchronize execution of processing requests originated by source(s) 102, 1102 and directed to processor(s) 106, 1106 by, for example, associating with such requests, or with the portions of them to be sent for execution by each of the multiple processors 106, 1106, data items usable by processor(s) 104, 1104 to communicate the requests to the corresponding processor(s) 106, 1106 at desired absolute or relative times, so as to achieve the desired synchronization of arrival of the requests at the corresponding execution processor(s) 106, 1106. For example, by using data items configured to cause communication of one or more portions of the requests at given time(s) according to a clock associated with processor(s) 104, 1104, processor(s) 104, 1104 can cause the request(s) or request portion(s) to be communicated at a desired time of day, or in any desired relative order or sequence without regard to the current time of day, but rather relative to one another or to some third index.

[00115] At 310, N is incremented by one, or by another suitable value, or control is otherwise returned to 302 so that process 302-308 continues. Process 302-310 optionally continues until a desired maximum number of iterations has been completed, or until all requests for transactions or other processing of orders have been processed (for example, routed to execution processors 106, 1106), or until other suitable criteria have been met.

[00116] To assist operators and users of system(s) 100, 1000, or components thereof, in understanding or evaluating the effect of the disclosed method and system for processing data by multiple network computing resources, in some aspects the present disclosure also provides various metrics (for example, trading benchmarks, in the case of a financial system 1000) that can be determined by, and using data generated by, any or all of the various components of a system 100, 1000.

[00117] Reference is now made to FIG. 6, which compares the results of transmitting multi-part trade execution requests to a plurality of network computing resources, or execution processors 106, 1106, according to an example of the disclosed method and system, against the results of conventionally transmitted multi-part trade requests.

[00118] FIG. 6a shows the results of execution of a multi-part transaction request using the disclosed methods and systems to obtain synchronized (in the illustrated case, substantially simultaneous) execution of the various parts or segments 624 of the multi-part transaction request (a sell order) by a plurality of exchange servers 106, 1106. In the example shown, a 94% fill rate for the original aggregated order was achieved at the original offered price 630 of $4.21 (shown as "Level 1"). In a second transaction cycle (which was filled in a single transaction, as shown at 626), the remaining volume was sold at a less desirable, but still acceptable, price 632 of $4.20 (shown as "Level 2"). The cost associated with orders filled below the requested order price (that is, the Level 2 orders) was $53,000 for the market systems 1102 (for example, client systems) and $10,049 for the capital management entity 1106.

[00119] In FIG. 6b, using prior-art trading methods and systems, an unsynchronized multi-part trade request (a multi-exchange sell order) consisting of multiple unsynchronized order segments 624' for the same overall transaction request resulted in an initial fill rate of 47% at the preferred order price 630 of $4.21 (shown as "Level 1"). A further 43% of the request was subsequently filled at the less desirable price 632 of $4.20 (shown as "Level 2"), with the remainder being filled at a further reduced price 634 of $4.19 (shown as "Level 3").

[00120] Using the methods and systems according to the disclosure, a volume-weighted average sale price (VWAP) 636 of $4.2094/share was realized, as shown at 628. Using the prior-art methods and systems, a lower VWAP 638 of approximately $4.204/share was realized.

[00121] As will be readily understood by those skilled in the relevant arts, systems 100, 1000 can comprise devices or components suitable for providing a wide variety of further metrics and functionalities. For example, reference is now made to FIG. 7, which illustrates two examples of the provision, by a routing processor 104, 1104 or other processor, of a benchmark comparison relative to a market average price provided, for example, by a market news service or other market data source 1126v. At 646, the performance of a system 100, 1000 in synchronized processing of a multi-part transaction request in accordance with the invention is compared to an "Average Price Benchmark" market performance indicator. Such an average price benchmark, or other benchmark or metric factor, can be obtained from, for example, any or all of components 1126, 1106, etc. At 644, the performance of a system 100, 1000 in non-synchronized processing of a multi-part transaction request according to prior-art methods is compared to the same "Average Price Benchmark" market performance indicator. Comparison of 646 and 644 indicates that processing of transactions in accordance with the invention provides better results for the seller of the financial interest. As will be understood by those skilled in the relevant arts, a wide variety of benchmarks can be used in evaluating the performance of systems and methods according to the invention. Such benchmarks can be determined at least in part by the nature of the system 100, 1000 used and by the types of transactions or other execution requests processed by that system.
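The VWAP figure in paragraph [00120] follows directly from the fill percentages of FIG. 6a; a quick arithmetic check, assuming Python (the 94%/6% and 47%/43%/10% splits are taken from the example above):

```python
def vwap(fills):
    """Volume-weighted average price over (price, quantity_fraction) pairs."""
    total_qty = sum(q for _, q in fills)
    return sum(p * q for p, q in fills) / total_qty

# Synchronized routing (FIG. 6a): 94% filled at $4.21, remaining 6% at $4.20.
print(round(vwap([(4.21, 0.94), (4.20, 0.06)]), 4))                 # -> 4.2094
# Unsynchronized routing (FIG. 6b): 47% at $4.21, 43% at $4.20, 10% at $4.19.
print(round(vwap([(4.21, 0.47), (4.20, 0.43), (4.19, 0.10)]), 4))   # -> about 4.2037
```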
[00122] In the embodiment shown in Figure 1B, the source(s) 1126 of data usable by processor(s) 104 in preparing financial transaction or other data processing execution requests include a plurality of modules 1126a-g useful in preparing a multi-part execution request. In the example shown, modules 1126a-g include market data processing module 1126a, exchange RTL measurement module 1126b, adaptive exchange RTL learning & compensation logic module 1126c, smart sweeping share allocation logic module 1126d, smart posting logic module 1126e, regional & national exchange access logic module 1126f, and aggressiveness management logic module 1126g.

[00123] Market data processing module 1126a receives and processes market data, which may be the same as or different from the data provided by the exchange market data module 1126v of the exchange server 1106. Sources of such data may be internal to the system 1104 or external, as needed or desired, and may include any suitable private or publicly available data sources useful in preparing execution requests, and particularly such requests as are useful in dividing or otherwise preparing a transaction order: the information provided may, for example, include the quantities and/or prices available on any particular exchange; historical trading volumes or prices; current and historical market depth or liquidity; reserve sizes; absolute, relative, and/or average price spreads; stock- or interest-specific heuristics; and/or trends in any or all of these.

[00124] Exchange RTL measurement module 1126b determines timing parameters for use in synchronizing execution of multi-part trade or other data processing requests by pluralities of exchange servers 1106, as for example explained herein, using statically defined latency data representing the time(s) elapsed between sending of requests or other data and receipt of confirmations or execution results from the individual execution processor(s) 106, 1106.

[00125] Adaptive exchange RTL learning & compensation logic module 1126c determines timing parameters for use in synchronizing execution of multi-part trade or other data processing requests by pluralities of exchange servers 1106, as for example explained herein, using dynamically defined ("rolling") latency data representing the times elapsed between sending of multiple processing requests, or other data, and receipt of confirmations or execution results from the individual execution processor(s) 106, 1106. Histograms and other data models and/or structures representing such rolling data can be used by module(s) 1126c in determining timing parameters according to such processes.

[00126] Smart sweeping share allocation logic module 1126d includes a statistical model for strategically oversizing transaction requests and/or associating reserve quantity(ies) with publicly posted orders, based on historically observed market data. This module 1126d determines, for example, an appropriate oversizing (that is, over-ordering on a trade request) to be incorporated in an open order, taking into account the predicted quantity(ies) of hidden reserve likely to be available at an exchange server 1106, based on statistical data about the hidden reserve available on that exchange server 1106 over a given period or under other specified conditions (for example, over the past 30 trade requests). Based on such predicted hidden market reserves, an appropriately sized hidden reserve can be determined and associated with a transaction order, resulting in strategic oversizing of the publicly viewable order and helping to ensure that the currently desired trading volume is realized.
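The disclosure does not give a formula for the oversizing performed by module 1126d. A minimal sketch of one plausible reading, assuming Python and a hypothetical `hidden_ratio` statistic (hidden volume filled divided by displayed volume, averaged over recent trades at a venue), is:

```python
def estimate_hidden_ratio(recent_fills) -> float:
    """recent_fills: iterable of (displayed_qty, hidden_qty_filled) pairs observed at
    one venue over, e.g., the past 30 trade requests (hypothetical input shape)."""
    displayed = sum(d for d, _ in recent_fills)
    hidden = sum(h for _, h in recent_fills)
    return hidden / displayed if displayed else 0.0

def oversized_order_qty(target_qty: int, hidden_ratio: float) -> int:
    """Strategically oversize the request so that, after the venue's expected hidden
    reserve absorbs part of it, roughly target_qty ends up being executed."""
    return int(round(target_qty * (1.0 + hidden_ratio)))

# Example: 10,000 shares wanted; venue historically fills ~18% extra from hidden reserve.
ratio = estimate_hidden_ratio([(5_000, 900), (5_000, 900)])   # -> 0.18
print(oversized_order_qty(10_000, ratio))                     # -> 11800
```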
[00127] Smart posting logic module 1126e includes a statistical model for determining the likelihood of fills (that is, the percentage of a trade request expected to be satisfied) on trade requests routed to individual exchange servers 1106. Such statistical models can, for example, draw on historical fill data observed at such individual exchanges over a given period (for example, the past 30 trade requests, the last month, the previous 12 months, etc.). A smart posting logic module 1126e can take into account factors including, for example, the depth at the top of the order book at each exchange server 1106, the level of volatility across the exchange servers 1106, and the average latency time to execute a trade request, among other factors.

[00128] Regional & national exchange access logic module 1126f provides information about how a trade request should be routed to an exchange server 1106, depending on whether the exchange server 1106 is regional or national. Data stored internally and/or externally regarding the appropriate protocol(s) to be used, regulations to be observed, etc., can be used in providing such information. Such data can be used, for example, to ensure that trade or other processing requests sent to external resources 106, 1106 by routing processor(s) 104, 1104 are properly formatted in view of the resource(s) 106, 1106 to which the request(s) are provided, and to ensure that such request(s) comply with all applicable legal standards.

[00129] Aggressiveness management logic module 1126g includes a probability model for determining the expected fill percentage at individual exchange servers 1106, and for modifying the execution requests routed to such servers accordingly. Such a module 1126g can take into account factors such as, for example, the fill rate at each exchange server 1106, the depth of book at each exchange server 1106, and the levels of volatility across the exchange servers 1106, among other factors.

[00130] Although the disclosure has been provided and illustrated in connection with specific, presently preferred embodiments, many variations and modifications may be made without departing from the spirit and scope of the invention(s) disclosed herein. The disclosure and invention(s) are therefore not to be limited to the exact components or details of methodology or construction set forth above. Except to the extent necessary or inherent in the processes themselves, no particular order for the steps or stages of the methods or processes described in this disclosure, including the Figures, is intended or implied. In many cases the order of process steps may be varied without changing the purpose, effect, or import of the methods described. The scope of the claims is to be defined solely by the appended claims, giving due consideration to the doctrine of equivalents and related doctrines.
Claims (31)

[0001] 1. System for performing synchronized data processing by multiple network computing resources (106, 1106), the system characterized in that it comprises at least one processor configured to execute machine-interpretable instructions that cause the system to: receive, from one or more data sources, signals representing instructions for execution of at least one data process executable by a plurality of network computing resources (106, 1106); divide the at least one data process into a plurality of data processing segments, each data processing segment to be routed to a different one of a plurality of networked execution processors to execute a respective portion of the at least one data process; based at least in part on latencies in execution of prior data processing requests routed by the system to each of the plurality of networked execution processors, determine a plurality of timing parameters, each of the plurality of timing parameters to be associated with a corresponding one of the plurality of data processing segments; and, in accordance with a timing sequence using the timing parameters associated with the plurality of data processing segments, route the plurality of data processing segments to the corresponding plurality of networked execution processors, the plurality of timing parameters being determined so as to cause synchronized arrival or execution of the plurality of data processing segments by the plurality of networked execution processors.

[0002] 2. System according to claim 1, characterized in that at least one of the plurality of determined timing parameters is determined based at least in part on dynamically monitored latency in execution of signal processing requests routed by the system to at least one of the plurality of networked execution processors.

[0003] 3. System according to claim 1 or 2, characterized in that at least one of the plurality of determined timing parameters is determined based at least in part on at least one of: a communication delay and a processing delay.

[0004] 4. System according to claim 1, characterized in that at least one of the plurality of determined timing parameters is determined based at least in part on a latency probability model.

[0005] 5. System according to any one of claims 1 to 4, characterized in that the networked execution processors comprise exchange servers and the data processing segments represent requests for trades in financial interests.

[0006] 6. System according to claim 5, characterized in that the financial interests include at least one of commodities and intangible interests.

[0007] 7. System according to claim 5, characterized in that the financial interests include at least one of shares, bonds, and options.

[0008] 8. System according to any one of claims 5 to 7, characterized in that the at least one processor is further configured to execute instructions that cause the system to: associate, with each of at least one of the plurality of data processing segments, data representing at least one quantity term, the at least one quantity term representing at least one quantity of a financial interest to be traded in accordance with a request represented by each of the at least one data processing segments, and at least one corresponding price term associated with each such quantity term, the price term representing at least one proposed price at which a trade represented by the at least one data processing segment is to be executed; the at least one quantity term being greater than at least one quantity of the financial interest publicly offered, at a price equivalent to the corresponding associated price term, in a market associated with the networked execution processor(s) to which the at least one data processing segment is to be routed.

[0009] 9. System according to claim 8, characterized in that the at least one quantity term is determined based at least in part on a trading history associated with the market associated with the networked execution processor to which the data processing segment is to be routed.

[0010] 10. System according to any one of claims 1 to 9, characterized in that the at least one processor is configured to execute machine-interpretable instructions that cause the system to: monitor execution of data processing requests by each of the plurality of network computing resources (106, 1106); determine at least one timing parameter associated with a latency in execution of data processing requests between the system and each of the plurality of network computing resources (106, 1106); and store the at least one timing parameter in machine-readable memory accessible by the at least one processor.

[0011] 11. System according to claim 10, characterized in that the at least one latency is associated with at least one of: a communication delay and a processing delay.

[0012] 12. System according to claim 10 or 11, characterized in that execution of signal processing requests is monitored periodically.

[0013] 13. System according to claim 10 or 11, characterized in that execution of signal processing requests is monitored continuously.

[0014] 14. Method performed by at least one processor executing machine-interpretable instructions, the method characterized in that it comprises the steps of: associating, with signals representing instructions for execution of a plurality of data processing segments, each data processing segment representing instructions for execution of a respective portion of a data process executable by a plurality of network computing resources (106, 1106), the data process representing a plurality of proposed transactions in one or more financial interests, at least one timing parameter determined at least in part using one or more latencies associated with execution of signal processing requests by at least one of the network computing resources (106, 1106); and, in accordance with a timing sequence using the at least one associated timing parameter, routing the signals representing instructions for execution of the plurality of portions of the plurality of proposed transactions to the plurality of network computing resources (106, 1106); the at least one associated timing parameter being determined so as to cause synchronized arrival or execution of the instructions for execution of the plurality of portions of the plurality of proposed transactions at the plurality of network computing resources (106, 1106).

[0015] 15. Method according to claim 14, characterized in that the at least one timing parameter is determined based at least in part on dynamically monitored latency in execution of signal processing requests routed to at least one of the plurality of network computing resources (106, 1106).

[0016] 16. Method according to claim 14 or 15, characterized in that the at least one timing parameter is determined based at least in part on statistical latency in execution of signal processing requests routed to at least one of the plurality of network computing resources (106, 1106).

[0017] 17. Method according to any one of claims 14 to 16, characterized in that the at least one timing parameter is determined based at least in part on historical latency in execution of signal processing requests routed to at least one of the plurality of network computing resources (106, 1106).

[0018] 18. Method according to any one of claims 14 to 17, characterized in that the at least one timing parameter is determined based at least in part on predicted latency in execution of signal processing requests routed to at least one of the plurality of network computing resources (106, 1106).

[0019] 19. Method according to any one of claims 14 to 18, characterized in that the at least one timing parameter is determined so that the synchronized arrival or execution is simultaneous.

[0020] 20. Method according to any one of claims 14 to 18, characterized in that the at least one timing parameter is determined so that the synchronized arrival or execution follows a non-simultaneous sequence.

[0021] 21. Method according to any one of claims 14 to 20, characterized in that the at least one timing parameter is determined so that the synchronized arrival or execution follows a determined relative timing.

[0022] 22. Method according to any one of claims 14 to 21, characterized in that the at least one timing parameter is determined based at least in part on at least one of: a communication delay or a processing delay.

[0023] 23. Method according to any one of claims 14 to 22, characterized in that the at least one timing parameter is determined based at least in part on a latency probability model.

[0024] 24. Method according to any one of claims 14 to 23, characterized in that the financial interests include at least one of commodities and monetary interests.

[0025] 25. Method according to any one of claims 14 to 24, characterized in that the financial interests include at least one of equity interests, non-equity interests, or derivatives thereof.

[0026] 26. Method according to any one of claims 14 to 25, characterized in that it comprises: generating the signals representing instructions for execution of the plurality of data processing segments; and, based at least in part on latencies in execution of prior data processing requests routed to each of the plurality of network computing resources (106, 1106), determining a plurality of timing parameters, each of the plurality of timing parameters to be associated with a corresponding one of the plurality of data processing segments.

[0027] 27. Method according to any one of claims 14 to 26, characterized in that it comprises: receiving, from one or more sources, signals representing instructions for execution of at least one data process executable by a plurality of network computing resources (106, 1106); and dividing the at least one data process into the plurality of data processing segments, each data processing segment to be routed to a different one of a plurality of networked execution processors.

[0028] 28. Method according to any one of claims 14 to 27, characterized in that the method further comprises: associating, with each of the plurality of data processing segments, data representing at least one quantity term, the at least one quantity term representing at least one quantity of a financial interest to be traded in accordance with a request represented by the corresponding data processing segment, and at least one corresponding price term, the price term representing at least one proposed price at which a trade represented by the at least one data processing segment is to be executed.

[0029] 29. Method according to claim 28, characterized in that the at least one quantity term is greater than at least one quantity of the financial interest publicly offered, at a price equivalent to the corresponding associated price term, in a market associated with the networked execution processor(s) to which the at least one data processing segment is to be routed.

[0030] 30. Device characterized in that it comprises at least one processor configured to carry out the method as defined in any one of claims 14 to 29.

[0031] 31. Computer-readable medium characterized in that it comprises non-transitory machine-readable programming structures configured to cause at least one processor to execute the method as defined in any one of claims 14 to 29.
a network of computing resources| US9769248B1|2014-12-16|2017-09-19|Amazon Technologies, Inc.|Performance-based content delivery| US9705769B1|2014-12-17|2017-07-11|Juniper Networks, Inc.|Service latency monitoring using two way active measurement protocol| US10311371B1|2014-12-19|2019-06-04|Amazon Technologies, Inc.|Machine learning based content delivery| US10311372B1|2014-12-19|2019-06-04|Amazon Technologies, Inc.|Machine learning based content delivery| US10225365B1|2014-12-19|2019-03-05|Amazon Technologies, Inc.|Machine learning based content delivery| SG10202110018RA|2015-02-27|2021-10-28|Royal Bank Of Canada|Coordinated processing of data by networked computing resources| US10225326B1|2015-03-23|2019-03-05|Amazon Technologies, Inc.|Point of presence based data uploading| US9774619B1|2015-09-24|2017-09-26|Amazon Technologies, Inc.|Mitigating network attacks| US10506074B2|2015-09-25|2019-12-10|Verizon Patent And Licensing Inc.|Providing simultaneous access to content in a network| US10817606B1|2015-09-30|2020-10-27|Fireeye, Inc.|Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic| US10419975B1|2015-12-11|2019-09-17|Spectranet, Inc.|Parallel multi-bit low latency wireless messaging| JP2019509580A|2016-03-24|2019-04-04|トラディション アメリカ,エルエルシー|System and method for live order processing| US10748210B2|2016-08-09|2020-08-18|Chicago Mercantile Exchange Inc.|Systems and methods for coordinating processing of scheduled instructions across multiple components| US10943297B2|2016-08-09|2021-03-09|Chicago Mercantile Exchange Inc.|Systems and methods for coordinating processing of instructions across multiple components| WO2018044334A1|2016-09-02|2018-03-08|Iex Group. Inc.|System and method for creating time-accurate event streams| US10706470B2|2016-12-02|2020-07-07|Iex Group, Inc.|Systems and methods for processing full or partially displayed dynamic peg orders in an electronic trading system| US11037243B2|2018-04-25|2021-06-15|Ubs Business Solutions Ag|Dynamic dissemination of information to network devices| US11037241B2|2018-04-25|2021-06-15|Ubs Business Solutions Ag|Dynamic dissemination of information to network devices| US10511520B1|2018-05-29|2019-12-17|Ripple Labs Inc.|Multi-hop path finding| US10795974B2|2018-05-31|2020-10-06|Microsoft Technology Licensing, Llc|Memory assignment for guest operating systems| US11184288B2|2019-01-11|2021-11-23|Arista Networks, Inc.|System and a method for controlling timing of processing network data|
Legal status:
2018-03-27 | B15K | Others concerning applications: alteration of classification | IPC: G06Q 40/04 (2012.01), H04L 12/26 (2006.01), H04L 1
2019-01-15 | B06F | Objections, documents and/or translations needed after an examination request according to art. 34 of the industrial property law
2019-08-06 | B06U | Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure
2020-08-25 | B09A | Decision: intention to grant
2020-12-08 | B16A | Patent or certificate of addition of invention granted | Free format text: Term of validity: 10 (ten) years counted from 08/12/2020, subject to the legal conditions.
Priority:
Application number | Application date | Patent title
US28537509P | 2009-12-10 | 2009-12-10
US61/285,375 | 2009-12-10
PCT/CA2010/000872 | WO2011069234A1 | 2009-12-10 | 2010-06-08 | Synchronized processing of data by networked computing resources